Add ScheduleOnly initPolicy #59
Conversation
pf.hasStaleAdmintoolsConf = true
// We can't reliably set compat21NodeName because the operator didn't
// originate the install. We will intentionally leave that blank.
pf.compat21NodeName = ""
No change required, just trying to confirm my understanding of the functionality: does this mean re_ip will not be automated on the schedule-only pods? (Edit: I was asking whether there could be a case where some pods are managed by the operator starting from installation while others are not. But since there can be only one init policy, the behavior is consistent: either the operator manages the entire cluster from the very beginning, or it only cares about schedule-only pods. So there is no case where some pods are managed by the operator from installation and some are not.) I recall compat21NodeName is used for re_ip when the db is down, but I'd like to double-check that my memory is correct.
nvm, I saw the changes in restart_reconcile and I think my question is answered.
Thanks for taking a look @ningdeng
This introduces a new initPolicy called ScheduleOnly. The bootstrap of the database, either create_db or revive_db, is not handled. Use this policy when you have a Vertica cluster running outside of Kubernetes and you want to provision new nodes to run inside Kubernetes.
Most of the automation is disabled when running in this mode. The only automation performed is an attempt to restart any down pods with 'admintools -t restart_node'. The user is responsible for adding the pods as hosts to the Vertica cluster (update_vertica), adding them as nodes to the database (admintools -t db_add_node), and handling any restart of the cluster (admintools -t re_ip/start_db).
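The manual steps above can be sketched as a short runbook. This is illustrative only: the hostnames, database name, file paths, and exact flags are assumptions, not taken from this PR.

```shell
# Add the new Kubernetes pods as hosts to the existing cluster
# (run from a host already in the cluster; values are placeholders).
/opt/vertica/sbin/update_vertica --add-hosts pod-1.example,pod-2.example \
    --rpm /tmp/vertica.rpm

# Add the new hosts as nodes in the database.
admintools -t db_add_node -d mydb -s pod-1.example,pod-2.example

# If node addresses changed while the db was down, re-map them, then start.
admintools -t re_ip -f /tmp/mapfile.txt
admintools -t start_db -d mydb
```

Once the pods are nodes in the database, the operator's restart_node automation can take over keeping them up.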
Here is a sample CR:
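A minimal sketch of such a CR follows. The apiVersion, metadata name, and subcluster values are illustrative assumptions; only initPolicy: ScheduleOnly and the omission of the communal section come from the description in this PR.

```yaml
apiVersion: vertica.com/v1beta1
kind: VerticaDB
metadata:
  name: verticadb-sample      # illustrative name
spec:
  initPolicy: ScheduleOnly
  # .spec.communal is intentionally omitted; no create_db/revive_db occurs.
  subclusters:
    - name: sc1               # name need not match the actual subcluster
      size: 3                 # dictates how many pods are created
```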
Notice that the entire .spec.communal section is omitted. The number of pods that are created is dictated by the size of each subcluster. However, subclusters aren't added by the operator; we group pods by subcluster only to control the name of each pod. The actual subcluster a pod is part of does not have to match the name in the CR.